130 research outputs found
Using neural networks to estimate redshift distributions. An application to CFHTLenS
We present a novel way of using neural networks (NN) to estimate the redshift
distribution of a galaxy sample. We are able to obtain a probability density
function (PDF) for each galaxy using a classification neural network. The
method is applied to 58714 galaxies in CFHTLenS that have spectroscopic
redshifts from DEEP2, VVDS and VIPERS. Using this data we show that the stacked
PDFs give an excellent representation of the true N(z) using information
from 5, 4 or 3 photometric bands. We show that the fractional error due to
using N(z_(phot)) instead of N(z_(truth)) is ≤ 1 per cent on the lensing power spectrum
P_(kappa) in several tomographic bins. Further we investigate how well this
method performs when few training samples are available and show that in this
regime the neural network slightly overestimates the N(z) at high z. Finally
the case where the training sample is not representative of the full data set
is investigated. An IPython notebook accompanying this paper is made available
here: https://bitbucket.org/christopher_bonnett/nn_noteboo
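The classification-based PDF idea above can be sketched generically: discretize redshift into bins, train a classifier on photometry, and read the class probabilities as a per-galaxy redshift PDF. This is a minimal illustration, not the authors' architecture; scikit-learn's MLPClassifier and synthetic data stand in for the paper's network and the CFHTLenS sample.

```python
# Minimal sketch (assumptions: synthetic photometry, MLPClassifier as a
# stand-in for the paper's neural network). Stacking (averaging) the
# per-galaxy PDFs estimates N(z).
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_gal, n_bands, n_bins = 2000, 5, 40           # 5 photometric bands, as in the paper
z_true = rng.uniform(0.0, 1.3, n_gal)          # hypothetical spectroscopic redshifts
# Toy photometry loosely correlated with redshift:
mags = z_true[:, None] + rng.normal(0.0, 0.3, (n_gal, n_bands))

edges = np.linspace(0.0, 1.3, n_bins + 1)
labels = np.digitize(z_true, edges[1:-1])      # redshift-bin label per galaxy

clf = MLPClassifier(hidden_layer_sizes=(50,), max_iter=500, random_state=0)
clf.fit(mags, labels)

pdfs = clf.predict_proba(mags)                 # one redshift PDF per galaxy
n_z = pdfs.mean(axis=0)                        # stacked PDFs -> N(z) estimate
```

Because each row of `predict_proba` sums to one, the stacked estimate is automatically normalized over the bins.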
Anomaly detection for machine learning redshifts applied to SDSS galaxies
We present an analysis of anomaly detection for machine learning redshift
estimation. Anomaly detection allows the removal of poor training examples,
which can adversely influence redshift estimates. Anomalous training examples
may be photometric galaxies with incorrect spectroscopic redshifts, or galaxies
with one or more poorly measured photometric quantity. We select 2.5 million
'clean' SDSS DR12 galaxies with reliable spectroscopic redshifts, and 6730
'anomalous' galaxies with spectroscopic redshift measurements which are flagged
as unreliable. We contaminate the clean base galaxy sample with galaxies with
unreliable redshifts and attempt to recover the contaminating galaxies using
the Elliptical Envelope technique. We then train four machine learning
architectures for redshift analysis on both the contaminated sample and on the
preprocessed 'anomaly-removed' sample and measure redshift statistics on a
clean validation sample generated without any preprocessing. We find an
improvement on all measured statistics of up to 80% when training on the
anomaly removed sample as compared with training on the contaminated sample for
each of the machine learning routines explored. We further describe a method to
estimate the contamination fraction of a base data sample.
Comment: 13 pages, 8 figures, 1 table, minor text updates to match MNRAS accepted version.
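Assuming the Elliptical Envelope technique corresponds to scikit-learn's EllipticEnvelope (robust-covariance outlier detection), the preprocessing step might be sketched as follows. The features are synthetic stand-ins for SDSS photometric quantities, and the injected outliers only loosely mirror the paper's contamination experiment.

```python
# Sketch of anomaly removal with a robust-covariance outlier detector
# (assumption: EllipticEnvelope as the paper's Elliptical Envelope).
import numpy as np
from sklearn.covariance import EllipticEnvelope

rng = np.random.default_rng(1)
clean = rng.normal(0.0, 1.0, (5000, 4))        # 'clean' training galaxies
anomalous = rng.normal(6.0, 1.0, (50, 4))      # injected contaminants
X = np.vstack([clean, anomalous])

detector = EllipticEnvelope(contamination=0.01, random_state=1)
flags = detector.fit_predict(X)                # +1 = inlier, -1 = outlier

X_removed = X[flags == 1]                      # 'anomaly-removed' training set
contamination_frac = float(np.mean(flags == -1))  # rough contamination estimate
```

A downstream photo-z model would then be trained on `X_removed` rather than the contaminated `X`.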
Fixing an Accident of History: Assessing a social insurance model to achieve adequate universal drug insurance
Canada's drug insurance system remains fragmented and expensive. The plethora of private and public plans still does not adequately cover all Canadians. Demographic changes, disease profiles and the introduction of many new very-high-cost drugs mean Canadians will be increasingly unable to afford access to medically necessary drug therapy.
The lack of adequate universal drug insurance, a policy known in Canada as national pharmacare (NPh), is an important problem. A solution must address patient needs for access and equity at a cost that is sustainable to payers. Most academic literature has identified a public single payer model as the optimal approach but despite many studies and reports, this has yet to be implemented. This suggests the need for an alternative.
The Canadian alternative to a single payer plan is popularly called “fill the gaps.” This approach argues that the federal government should target its funding to expand provincial coverage to cover the uninsured. A social drug insurance model, used in many European countries, retains private insurance but improves on it by regulating, structuring and aligning it with medicare. As such it is the logical proxy for those who favour “fill the gaps.”
Social insurance may achieve adequate universal drug insurance at a much lower per capita cost. Social insurance financing relies primarily on employers and workers, which closely resembles Canada's very large private drug insurance market. However, private drug plans are mostly tied to employment, are voluntary and face threats to sustainability and affordability similar to those facing provincial plans. An organized mixed-financing model like social drug insurance would spread risk and is a more feasible approach than a single payer plan to achieve adequate universal drug insurance. If a comprehensive single payer NPh plan is not implemented, adopting key social insurance features and regulations would significantly improve access and quality.
An extensive literature review is presented, followed by a qualitative thematic analysis of drug insurance opinion leader interviews and a comparative analysis of health and drug systems in three jurisdictions. The theoretical perspective of how a government may recognize and prioritize certain problems over others draws on John Kingdon's Agendas, Alternatives, and Public Policies (2011).
The major findings are presented in four chapters. Chapter 4 examines our current shared-funding model for prescription drugs and establishes that social drug insurance could work in the context of our institutionalized model.
Chapter 5 provides a comparative review of social health insurance (SHI) models in Germany, the Netherlands and Quebec to help identify the form and features useful in a pan-Canadian system. Certain features warrant serious consideration in Canada. The Netherlands has created an aggressive drug price and cost control architecture. Germany's Federal Joint Committee is a participatory multi-stakeholder governance model that could improve transparency and better address system complexity and sustainability. Quebec provides 20 years of guidance on key drug system features and risks in a Canadian context.
Chapters 6 and 7 present the most important findings from 26 interviews with drug plan experts and influencers in different sectors from across Canada. Thematic analysis identified five roles for the federal government: funder, coordinator/secretariat, leader, relationship manager and nation-builder. These can be operationalized as funding that ensures adequate universal access, the creation of national standards for a list of covered drugs (formulary) and for patient cost-sharing to limit personal financial risk. Most participants wanted private insurance to continue as a significant funder but private payers remain marginalized, “outside the tent” in this policy debate. Employers are important funders and have not been extensively consulted. Relationships and trust among key stakeholders are limited or weak. These are crucial constraints to progress.
Kingdon's model, explored in Chapter 8, indicates the problem, policy and political streams are no longer aligned, so a window of opportunity for universal drug insurance has become far less likely. However, the window could re-open with less ideological stances by advocates supporting either model. This requires a standing forum for constructive and time-bounded dialogue to produce a strategy, funding structure, a realistic implementation plan and a modern governance model. Participants recommended the federal government play this leadership role. An influential NPh policy entrepreneur would energize this work.
Related structural changes have recently been proposed to reduce drug prices and costs, such as reform of the Patented Medicine Prices Review Board, the creation of a new Canadian Drug Agency and a strategy for drugs for rare diseases. These financial changes are very important, but are not a comprehensive reform and will not in themselves assure adequate universal drug insurance. NPh is still needed and must proceed in parallel given its complexity and long gestation period.
A carefully designed social drug insurance model that includes a regulated role for private drug plans would spread risk among payers, reflect our social values, provide choice in coverage and enable access to sophisticated consumer- and patient-focused insurer technology that complements public plan expertise in health technology assessment. Mitigating financial and political risk is important to governments, and moving to implementation is important to patients.
Social drug insurance is still a very complex, multi-year change, meaning careful, consultative planning for implementation and transition is crucial. This model could serve as a template to achieve universal funding for other core health services, such as long-term care and community care.
Kingdon notes policy change is more likely if complexity, confounding information and financial and reputational risk for governments can be reduced. Rather than a polarized, dichotomous and perpetual debate typical of decades of NPh failure, a classic Canadian compromise may be possible that allows adequate universal drug insurance to be more quickly implemented through a hybrid model.
Memory Inflation during Chronic Viral Infection Is Maintained by Continuous Production of Short-Lived, Functional T Cells
Summary: During persistent murine cytomegalovirus (MCMV) infection, the T cell response is maintained at extremely high intensity for the life of the host. These cells closely resemble human CMV-specific cells, which compose a major component of the peripheral T cell compartment in most people. Despite a phenotype that suggests extensive antigen-driven differentiation, MCMV-specific T cells remain functional and respond vigorously to viral challenge. We hypothesized that a low rate of antigen-driven proliferation would account for the maintenance of this population. Instead, we found that most of these cells divided only sporadically in chronically infected hosts and had a short half-life in circulation. The overall population was supported, at least in part, by memory T cells primed early in infection, as well as by recruitment of naive T cells at late times. Thus, these data show that memory inflation is maintained by a continuous replacement of short-lived, functional cells during chronic MCMV infection.
Cross-Presentation of a Spread-Defective MCMV Is Sufficient to Prime the Majority of Virus-Specific CD8+ T Cells
CD8+ T cells can be primed by peptides derived from endogenous proteins (direct presentation) or from exogenously acquired protein (cross-presentation). However, the relative ability of these two pathways to prime CD8+ T cells during a viral infection remains controversial. Cytomegaloviruses (CMVs) can infect professional antigen-presenting cells (APCs), including dendritic cells, thus providing peptides for direct presentation. However, the viral immune evasion genes profoundly impair recognition of infected cells by CD8+ T cells. Nevertheless, CMV infection elicits a very strong CD8+ T cell response, prompting its recent use as a vaccine vector. We have shown previously that deleting the immune evasion genes from murine cytomegalovirus (MCMV) that target class I MHC presentation has no impact on the size or breadth of the CD8+ T cell response elicited by infection, suggesting that the majority of MCMV-specific CD8+ T cells in vivo are not directly primed by infected professional APCs. Here we use a novel spread-defective mutant of MCMV, lacking the essential glycoprotein gL, to show that cross-presentation alone can account for the majority of MCMV-specific CD8+ T cell responses to the virus. Our data support the conclusion that cross-presentation is the primary mode of antigen presentation by which CD8+ T cells are primed during MCMV infection.
CFHTLenS: Co-evolution of galaxies and their dark matter haloes
Galaxy-galaxy weak lensing is a direct probe of the mean matter distribution around galaxies. The depth and sky coverage of the CFHT Legacy Survey yield statistically significant galaxy halo mass measurements over a much wider range of stellar masses (10^(8.75) to 10^(11.3) M_⊙) and redshifts (0.2 < z < 0.8) than previous weak lensing studies. At redshift z ∼ 0.5, the stellar-to-halo mass ratio (SHMR) reaches a maximum of 4.0 ± 0.2 per cent as a function of halo mass at ∼10^(12.25) M_⊙. We find, for the first time from weak lensing alone, evidence for significant evolution in the SHMR: the peak ratio falls as a function of cosmic time from 4.5 ± 0.3 per cent at z ∼ 0.7 to 3.4 ± 0.2 per cent at z ∼ 0.3, and shifts to lower stellar mass haloes. These evolutionary trends are dominated by red galaxies, and are consistent with a model in which the stellar mass above which star formation is quenched "downsizes" with cosmic time. In contrast, the SHMR of blue, star-forming galaxies is well fitted by a power law that does not evolve with time. This suggests that blue galaxies form stars at a rate that is balanced with their dark matter accretion in such a way that they evolve along the SHMR locus. The redshift dependence of the SHMR can be used to constrain the evolution of the galaxy population over cosmic time.
Comment: 18 pages, MNRAS, in press.
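The non-evolving power-law behaviour reported for the blue-galaxy SHMR can be illustrated with a toy log-space fit. The numbers below are made up for illustration; this is not the paper's halo-model measurement.

```python
# Toy illustration (hypothetical values): a power-law SHMR means
# log(M*/Mh) is linear in log Mh, so a linear least-squares fit in log
# space recovers the assumed slope and normalization.
import numpy as np

rng = np.random.default_rng(2)
log_mh = np.linspace(11.0, 12.5, 20)       # hypothetical halo masses (log10 M_sun)
true_a, true_b = -2.0, 0.5                 # assumed, for illustration only
log_shmr = true_a + true_b * (log_mh - 12.0) + rng.normal(0.0, 0.02, 20)

# Solve log(SHMR) = a + b * (log Mh - 12) by linear least squares
A = np.column_stack([np.ones_like(log_mh), log_mh - 12.0])
(a_fit, b_fit), *_ = np.linalg.lstsq(A, log_shmr, rcond=None)
```

Redshift evolution would appear as a drift in the fitted normalization `a_fit` between tomographic bins, which the abstract reports for red but not blue galaxies.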
CFHTLenS: the environmental dependence of galaxy halo masses from weak lensing
We use weak gravitational lensing to analyse the dark matter haloes around satellite galaxies in galaxy groups in the Canada–France–Hawaii Telescope Lensing Survey (CFHTLenS) data set. This data set is derived from the Canada–France–Hawaii Telescope Legacy Survey Wide survey, and encompasses 154 deg^2 of high-quality shape data. Using the photometric redshifts, we divide the sample of lens galaxies with stellar masses in the range 10^(9)–10^(10.5) M_⊙ into those likely to lie in high-density environments (HDE) and those likely to lie in low-density environments (LDE). Through comparison with galaxy catalogues extracted from the Millennium Simulation, we show that the sample of HDE galaxies should primarily (∼61 per cent) consist of satellite galaxies in groups, while the sample of LDE galaxies should consist of mostly (∼87 per cent) non-satellite (field and central) galaxies. Comparing the lensing signals around samples of HDE and LDE galaxies matched in stellar mass, the lensing signal around HDE galaxies clearly shows a positive contribution from their host groups at radii of ∼500–1000 kpc, the typical separation between satellites and group centres. More importantly, the subhaloes of HDE galaxies are less massive than those around LDE galaxies by a factor of 0.65 ± 0.12, significant at the 2.9σ level. A natural explanation is that the haloes of satellite galaxies are stripped through tidal effects in the group environment. Our results are consistent with a typical tidal truncation radius of ∼40 kpc.
CFHTLenS: combined probe cosmological model comparison using 2D weak gravitational lensing
We present cosmological constraints from 2D weak gravitational lensing by the large-scale structure in the Canada–France–Hawaii Telescope Lensing Survey (CFHTLenS), which spans 154 deg^2 in five optical bands. Using accurate photometric redshifts and measured shapes for 4.2 million galaxies between redshifts of 0.2 and 1.3, we compute the 2D cosmic shear correlation function over angular scales ranging between 0.8 and 350 arcmin. Using non-linear models of the dark-matter power spectrum, we constrain cosmological parameters by exploring the parameter space with Population Monte Carlo sampling. The best constraints from lensing alone are obtained for the small-scale density-fluctuations amplitude σ_8 scaled with the total matter density Ω_m. For a flat Λ cold dark matter (ΛCDM) model we obtain σ_8(Ω_m/0.27)^(0.6) = 0.79 ± 0.03.
We combine the CFHTLenS data with 7-year Wilkinson Microwave Anisotropy Probe (WMAP7) data, baryonic acoustic oscillations (BAO) from SDSS-III (BOSS) and a Hubble Space Telescope distance-ladder prior on the Hubble constant to get joint constraints. For a flat ΛCDM model, we find Ω_m = 0.283 ± 0.010 and σ_8 = 0.813 ± 0.014. In the case of a curved wCDM universe, we obtain Ω_m = 0.27 ± 0.03, σ_8 = 0.83 ± 0.04, w_0 = −1.10 ± 0.15 and Ω_K = 0.006^(+0.006)_(−0.004).
We calculate the Bayesian evidence to compare flat and curved ΛCDM and dark-energy CDM models. From the combination of all four probes, we find models with curvature to be moderately disfavoured with respect to the flat case. A simple dark-energy model is indistinguishable from ΛCDM. Our results therefore do not necessitate any deviations from the standard cosmological model.
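The lensing-only result constrains the degenerate combination σ_8(Ω_m/0.27)^0.6 rather than σ_8 itself, i.e. a band in the (Ω_m, σ_8) plane. A small helper (not from the paper) evaluates that band at an assumed Ω_m:

```python
# Evaluate the CFHTLenS lensing-only degeneracy band
# sigma_8 * (Omega_m / 0.27)^0.6 = 0.79 +/- 0.03 at a chosen Omega_m.
# The function name and interface are illustrative, not the paper's code.
def sigma8_band(omega_m, s8_comb=0.79, err=0.03, alpha=0.6, pivot=0.27):
    """Return (central, lower, upper) sigma_8 implied by the band."""
    scale = (omega_m / pivot) ** (-alpha)
    return s8_comb * scale, (s8_comb - err) * scale, (s8_comb + err) * scale

central, low, high = sigma8_band(0.27)   # at the pivot: 0.79 +/- 0.03
```

Larger assumed Ω_m implies smaller σ_8 along the band, which is why external probes such as WMAP7 and BAO are needed to break the degeneracy.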
CFHTLenS: the Canada–France–Hawaii Telescope Lensing Survey
We present the Canada–France–Hawaii Telescope Lensing Survey (CFHTLenS) that accurately determines a weak gravitational lensing signal from the full 154 deg^2 of deep multicolour data obtained by the CFHT Legacy Survey. Weak gravitational lensing by large-scale structure is widely recognized as one of the most powerful but technically challenging probes of cosmology. We outline the CFHTLenS analysis pipeline, describing how and why every step of the chain from the raw pixel data to the lensing shear and photometric redshift measurement has been revised and improved compared to previous analyses of a subset of the same data. We present a novel method to identify data which contributes a non-negligible contamination to our sample and quantify the required level of calibration for the survey. Through a series of cosmology-insensitive tests we demonstrate the robustness of the resulting cosmic shear signal, presenting a science-ready shear and photometric redshift catalogue for future exploitation.